Distribution of Semantic Features Across Speech & Gesture by Humans and Machines
Authors
Abstract
Participants in face-to-face dialogue have available to them information from a variety of modalities that can help them to understand what is being communicated by a speaker. While much of the information is conveyed by the speaker’s choice of words, his/her intonational patterns, facial expressions and gestures also reflect the semantic and pragmatic content of the intended message. In many cases, different modalities serve to reinforce one another, as when intonation contours serve to mark the most important word in an utterance, or when a speaker aligns the most effortful part of gestures with intonational prominences (Kendon, 1972). In other cases, semantic and pragmatic attributes of the message are distributed across the modalities such that the full communicative intentions of the speaker are interpreted by combining linguistic and para-linguistic information. For example, a deictic gesture accompanying the spoken words "that folder" may substitute for an expression that encodes all of the necessary information in the speech channel, such as "the folder on top of the stack to the left of my computer."
Similar Resources
Paired Speech and Gesture Generation in Embodied Conversational Agents
Using face-to-face conversation as an interface metaphor, an embodied conversational agent is likely to be easier to use and learn than traditional graphical user interfaces. To make a believable agent that to some extent has the same social and conversational skills as humans do, the embodied conversational agent system must be able to deal with input of the user from different communication m...
How Do We Communicate About Pain? A Systematic Analysis of the Semantic Contribution of Co-speech Gestures in Pain-focused Conversations
The purpose of the present study was to investigate co-speech gesture use during communication about pain. Speakers described a recent pain experience and the data were analyzed using a ‘semantic feature approach’ to determine the distribution of information across gesture and speech. This analysis revealed that a considerable proportion of pain-focused talk was accompanied by gestures, and tha...
Developing a Semantic Similarity Judgment Test for Persian Action Verbs and Non-action Nouns in Patients With Brain Injury and Determining its Content Validity
Objective: Brain trauma evidences suggest that the two grammatical categories of noun and verb are processed in different regions of the brain due to differences in the complexity of grammatical and semantic information processing. Studies have shown that the verbs belonging to different semantic categories lead to neural activity in different areas of the brain, and action verb processing is r...
OpenMM: An Open-Source Multimodal Feature Extraction Tool
The primary use of speech is in face-to-face interactions; situational context and human behavior therefore intrinsically shape and affect communication. In order to usefully model situational awareness, machines must have access to the same streams of information humans have access to. In other words, we need to provide machines with features that represent each communicative modality: face...
Communicative Rhythm in Gesture and Speech
Motivated by the fundamental role that rhythm apparently plays in speech and gestural communication among humans, this study was undertaken to substantiate a biologically motivated model for synchronizing speech and gesture input in human-computer interaction. Our approach presents a novel method which conceptualizes a multimodal user interface on the basis of timed agent systems. We use multiple age...
Journal:
Volume/Issue:
Pages: -
Publication date: 1996